Conversation

Contributor

@mitmul commented Oct 25, 2025

This PR adds a line that explicitly sets res->t_embd to cur in the PLaMo2 model implementation.
Without this fix, Ollama always crashes: params.embeddings is hard-coded to true at ollama/llama/llama.go:L120, which leads to this assertion:
https://github.com/ollama/ollama/blob/ad6f6a1d29f45a5c7266bcd7edb5671621e86810/llama/llama.cpp/src/llama-graph.cpp#L1894
The assertion then fails because res->t_embd, set here:
https://github.com/ollama/ollama/blob/ad6f6a1d29f45a5c7266bcd7edb5671621e86810/llama/llama.cpp/src/llama-graph.cpp#L1882
is null in the current PLaMo2 model code.

This PR fixes that problem.

Collaborator

@CISC left a comment

Nice catch, thanks!

@CISC CISC merged commit 226f295 into ggml-org:master Oct 25, 2025
72 checks passed
@mitmul mitmul deleted the mitmul/put-value-to-t_embd branch October 25, 2025 10:41
wqerrewetw added a commit to wqerrewetw/llama.cpp that referenced this pull request Oct 25, 2025
* model-conversion : add trust_remote_code for orig model run [no ci] (ggml-org#16751)

This commit adds the trust_remote_code=True argument when loading models
using AutoConfig, AutoTokenizer, and AutoModelForCausalLM in the
run-original-model script.

The motivation for this is that some models require custom code to be
loaded properly, and setting trust_remote_code=True avoids a prompt
asking for user confirmation:
```console
(venv) $ make causal-run-original-model
The repository /path/to/model contains custom code which must be
executed to correctly load the model. You can inspect the repository
content at /path/to/model.

Do you wish to run the custom code? [y/N] N
```

Having this as the default seems like a safe choice: we already have to
clone or download the models we convert, so we can be expected to run any
custom code they include.

* webui: support q URL parameter (ggml-org#16728)

* webui: support q URL parameter

Fixes ggml-org#16722
I’ve checked that it works with Firefox’s AI tools

* webui: apply suggestions from code review

Co-authored-by: Aleksander Grygier <[email protected]>

* chore: update webui static build

---------

Co-authored-by: Aleksander Grygier <[email protected]>

* CUDA: use CUB for arbitrary size argsort (ggml-org#16754)

* ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (ggml-org#16742)

* Fix CUDA grid launch condition for large block_nums.y

* add backend ops test

* reduce test repetitions

* convert : avoid dequantizing mxfp4 for GPT-OSS (ggml-org#16756)

* vulkan: Optimize SSM_SCAN (ggml-org#16645)

* vulkan: delete dead code (ggml-org#16732)

ggml_vk_create_buffer_temp is not used anywhere, and it is the only
caller for ggml_vk_pool_malloc.

Signed-off-by: Giuseppe Scrivano <[email protected]>

* model : set res->t_embd in PLaMo2 models (ggml-org#16766)

---------

Signed-off-by: Giuseppe Scrivano <[email protected]>
Co-authored-by: Daniel Bevenius <[email protected]>
Co-authored-by: Florian Badie <[email protected]>
Co-authored-by: Aleksander Grygier <[email protected]>
Co-authored-by: Aman Gupta <[email protected]>
Co-authored-by: leejet <[email protected]>
Co-authored-by: compilade <[email protected]>
Co-authored-by: Jeff Bolz <[email protected]>
Co-authored-by: Giuseppe Scrivano <[email protected]>
Co-authored-by: Shunta Saito <[email protected]>